Bayesian Optimisation with Continuous Approximations
Abstract
To present the theoretical results for GP-UCB, we begin by defining the Maximum Information Gain (MIG), which characterises the statistical difficulty of GP bandits.

Definition 2 (Maximum Information Gain (Srinivas et al., 2010)). Let $f \sim \mathcal{GP}(0, \phi_{\mathcal{X}})$. Consider any $A \subset \mathbb{R}^d$ and let $A' = \{x_1, \dots, x_n\} \subset A$ be a finite subset. Let $f_{A'}, \epsilon_{A'} \in \mathbb{R}^n$ be such that $(f_{A'})_i = f(x_i)$ and $(\epsilon_{A'})_i \sim \mathcal{N}(0, \eta^2)$. Let $y_{A'} = f_{A'} + \epsilon_{A'}$. Denote the Shannon mutual information by $I$. The Maximum Information Gain of $A$ is
$$\Psi_n(A) = \max_{A' \subset A,\; |A'| = n} I(f_{A'};\, y_{A'}).$$
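For a GP observed under Gaussian noise this mutual information has the closed form $I(f_{A'}; y_{A'}) = \tfrac{1}{2}\log\det(I + \eta^{-2}K_{A'})$, where $K_{A'}$ is the kernel matrix on $A'$. The NumPy sketch below evaluates this quantity and greedily approximates $\Psi_n$ over a candidate grid; the RBF kernel, lengthscale, noise level, and grid are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.2):
    """Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    d2 = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def information_gain(X, eta=0.1):
    """I(f_A; y_A) = 0.5 * log det(I + eta^-2 K_A) for a GP with N(0, eta^2) noise."""
    K = rbf_kernel(X, X)
    _, logdet = np.linalg.slogdet(np.eye(len(X)) + K / eta**2)
    return 0.5 * logdet

def greedy_mig(candidates, n, eta=0.1):
    """Greedily pick n points to (approximately) maximise I(f_A; y_A).
    The information gain is submodular, so greedy selection is within a
    (1 - 1/e) factor of the optimum Psi_n."""
    pool, chosen = list(candidates), []
    for _ in range(n):
        gains = [information_gain(np.array(chosen + [c]), eta) for c in pool]
        chosen.append(pool.pop(int(np.argmax(gains))))
    return np.array(chosen), information_gain(np.array(chosen), eta)

# Example: approximate Psi_5 for A = [0, 1] via a 50-point candidate grid.
grid = [np.array([x]) for x in np.linspace(0.0, 1.0, 50)]
points, psi = greedy_mig(grid, n=5)
print(f"greedy information gain ~ {psi:.3f} nats")
```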
Similar Papers
Multi-fidelity Bayesian Optimisation with Continuous Approximations
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest t...
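To make the continuous-approximation setting concrete, here is a hypothetical sketch (the objective g, the fidelity parameter z, and the cost model are invented for illustration and are not the paper's construction): a black box g(x, z) whose bias shrinks and whose cost grows as the fidelity z ∈ [0, 1] approaches 1, with z = 1 recovering the true objective f.

```python
import numpy as np

def g(x, z):
    """Hypothetical continuous-fidelity black box: z in [0, 1] trades cost for bias.
    z = 1.0 evaluates the true objective f(x); smaller z gives a cheap, biased proxy."""
    f = -np.sum((x - 0.5) ** 2)                        # true objective, maximised at x = 0.5
    bias = (1.0 - z) * np.cos(8 * np.pi * np.sum(x))   # approximation error vanishes as z -> 1
    return f + 0.3 * bias

def cost(z):
    """Hypothetical cost model: high-fidelity queries are much more expensive."""
    return 0.05 + z ** 3

# A multi-fidelity strategy spends most of its budget at small z to locate
# promising regions, reserving expensive z ~ 1 queries to refine the incumbent.
x = np.array([0.3])
print(g(x, z=0.2), cost(0.2), "vs", g(x, z=1.0), cost(1.0))
```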
Piecewise Linear Approximations of Nonlinear Deterministic Conditionals in Continuous Bayesian Networks
To enable inference in continuous Bayesian networks containing nonlinear deterministic conditional distributions, Cobb and Shenoy (2005) have proposed approximating nonlinear deterministic functions by piecewise linear ones. In this paper, we describe two principles and a heuristic for finding piecewise linear approximations of nonlinear functions. We illustrate our approach for some commonly u...
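As a toy instance of the idea (the target function, uniform knot placement, and error metric below are illustrative choices, not the paper's two principles or heuristic), one can replace a nonlinear deterministic conditional such as y = x² with linear interpolation between a few knots:

```python
import numpy as np

# Approximate the nonlinear deterministic conditional y = x**2 by a piecewise
# linear function over a handful of knots. Uniform knot placement is used here
# purely for illustration; the paper's principles and heuristic for choosing
# breakpoints are not reproduced.
knots_x = np.linspace(-2.0, 2.0, 5)
knots_y = knots_x ** 2

x = np.linspace(-2.0, 2.0, 9)
approx = np.interp(x, knots_x, knots_y)   # linear interpolation between knots
exact = x ** 2
print("max abs error:", np.max(np.abs(approx - exact)))
```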
Mixtures of Polynomials in Hybrid Bayesian Networks with Deterministic Variables
The main goal of this paper is to describe inference in hybrid Bayesian networks (BNs) using mixtures of polynomials (MOP) approximations of probability density functions (PDFs). Hybrid BNs contain a mix of discrete, continuous, and conditionally deterministic random variables. The conditionals for continuous variables are typically described by conditional PDFs. A major hurdle in making infere...
Inference in hybrid Bayesian networks using mixtures of polynomials
The main goal of this paper is to describe inference in hybrid Bayesian networks (BNs) using mixture of polynomials (MOP) approximations of probability density functions (PDFs). Hybrid BNs contain a mix of discrete, continuous, and conditionally deterministic random variables. The conditionals for continuous variables are typically described by conditional PDFs. A major hurdle in making inferen...
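A stripped-down illustration of the MOP idea behind both of the papers above (a single polynomial piece fit by least squares; real MOP potentials are piecewise polynomials built by different constructions) approximates the standard normal PDF on a bounded interval:

```python
import numpy as np

# One-piece polynomial approximation of the standard normal PDF on [-3, 3].
# Real MOP potentials are *piecewise* polynomials and the papers use different
# constructions; the degree-8 least-squares fit here is only illustrative.
x = np.linspace(-3.0, 3.0, 200)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

coeffs = np.polyfit(x, pdf, deg=8)     # least-squares polynomial fit
poly = np.polyval(coeffs, x)

print("max abs error:", np.max(np.abs(poly - pdf)))
# Polynomials are closed under multiplication and integration, which is what
# makes MOP potentials convenient for inference in hybrid Bayesian networks.
print("approx. mass on [-3, 3]:", np.sum(poly) * (x[1] - x[0]))
```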
Bayesian Learning in Undirected Graphical Models: Approximate MCMC Algorithms
Bayesian learning in undirected graphical models—computing posterior distributions over parameters and predictive quantities— is exceptionally difficult. We conjecture that for general undirected models, there are no tractable MCMC (Markov Chain Monte Carlo) schemes giving the correct equilibrium distribution over parameters. While this intractability, due to the partition function, is familiar...
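The intractability referred to here can be made explicit with a standard exponential-family illustration (a textbook display, not taken from the paper). For an undirected model $p(x \mid \theta) = \exp(\theta^{\top}\phi(x)) / Z(\theta)$, the parameter posterior is
$$p(\theta \mid \mathcal{D}) \;\propto\; p(\theta) \prod_{m=1}^{M} \frac{\exp\big(\theta^{\top}\phi(x^{(m)})\big)}{Z(\theta)}, \qquad Z(\theta) = \sum_{x} \exp\big(\theta^{\top}\phi(x)\big),$$
and because $Z(\theta)$ sums over exponentially many configurations and changes with every proposed $\theta$, a Metropolis-Hastings sampler over the parameters cannot even evaluate its acceptance ratio exactly.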